ATLAS Data Challenge Production on Grid3
Authors
Abstract
We describe the design and operational experience of the ATLAS production system as implemented for execution on Grid3 resources. The execution environment consisted of a number of grid-based tools: Pacman for installation of VDT-based Grid3 services and ATLAS software releases, the Capone execution service built from the Chimera/Pegasus virtual data system for directed acyclic graph (DAG) generation, DAGMan/Condor-G for job submission and management, and the Windmill production supervisor which provides the Jabber messaging system for Capone interactions with the ATLAS production database at CERN. Datasets produced on Grid3 were registered into a distributed replica location service (Globus RLS) that was integrated with the Don Quijote proxy service for interoperability with other Grids used by ATLAS. We discuss performance, scalability, and fault handling during the first phase of ATLAS Data Challenge 2.
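The abstract describes a pipeline in which the Windmill supervisor hands job definitions to the Capone executor, which expands each job into a DAG (stage-in, execution, and output-registration steps) for DAGMan/Condor-G, with produced datasets registered in the RLS replica catalog. The sketch below is a minimal, hypothetical illustration of that flow, not the actual Capone or Windmill code; all class names, field names, and URLs are assumptions introduced for this example.

```python
# Hedged sketch of the supervisor -> executor -> DAG -> catalog flow.
# Class and field names are illustrative, not taken from the ATLAS code base.

from dataclasses import dataclass, field

@dataclass
class JobDefinition:
    """A production job as handed over by the supervisor (Windmill's role)."""
    job_id: str
    transformation: str            # e.g. an ATLAS software transformation
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)

class ExecutorSketch:
    """Minimal stand-in for the Capone execution service."""

    def __init__(self):
        self.dag = []              # ordered list of (node, parent_nodes)
        self.replica_catalog = {}  # logical file name -> storage URL (RLS's role)

    def build_dag(self, job: JobDefinition):
        # One stage-in node per input file, then one execution node that
        # depends on all stage-ins, then a registration node.
        for lfn in job.inputs:
            self.dag.append((f"stagein:{lfn}", []))
        parents = [node for node, _ in self.dag]
        self.dag.append((f"run:{job.transformation}", parents))
        self.dag.append((f"register:{job.job_id}", [f"run:{job.transformation}"]))

    def run(self, job: JobDefinition):
        # In the real system DAGMan/Condor-G executes the DAG on Grid3 sites;
        # here we just walk the nodes in order and record output replicas.
        self.build_dag(job)
        for node, _parents in self.dag:
            if node.startswith("register:"):
                for lfn in job.outputs:
                    self.replica_catalog[lfn] = f"gsiftp://site.example/{lfn}"
        return self.replica_catalog

job = JobDefinition("dc2-0001", "atlas.simul",
                    inputs=["evgen.root"], outputs=["simul.root"])
catalog = ExecutorSketch().run(job)
print(catalog)  # {'simul.root': 'gsiftp://site.example/simul.root'}
```

The deliberate split between a supervisor that owns the production database and stateless executors that translate jobs into grid submissions is what let the real system interoperate across Grids via the Don Quijote proxy.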
Similar resources
The Grid2003 Production Grid: Principles and Practice
The Grid2003 Project has deployed a multi-VO, application-driven grid laboratory (“Grid3”) that has sustained for several months the production-level services required by the physics experiments of the Large Hadron Collider at CERN (ATLAS and CMS), the Sloan Digital Sky Survey project, the gravitational wave search experiment LIGO, the BTeV experiment at Fermilab, as well as applications in mol...
ATLAS Data Challenge 1
The ATLAS Collaboration at CERN is preparing for future data taking and analysis at the LHC, which will start in 2007. To validate its Computing Model, its complete software suite, and its data model, and to ensure the correctness of the technical choices, it has been decided to run a series of so-called Data Challenges. In 2002 the main goals of these Data Challenges were the preparation and the deplo...
Prototyping Virtual Data Technologies in ATLAS Data Challenge 1 Production
A worldwide computing model, embracing a global data and computation infrastructure, is emerging to answer the LHC computing challenges. A significant fraction of the ATLAS Data Challenge 1 (DC1) was performed in a Grid environment. For efficiency of the large production tasks distributed worldwide, it is essential to provide shared production management tools comprised of integratable and inte...
The Higgs boson machine learning challenge
The Higgs Boson Machine Learning Challenge (HiggsML, or the Challenge for short) was organized to promote collaboration between high-energy physicists and data scientists. The ATLAS experiment at CERN provided simulated data that has been used by physicists in a search for the Higgs boson. The Challenge was organized by a small group of ATLAS physicists and data scientists. It was hosted by Kagg...
Triggering on Hadronic Tau Decays in ATLAS: Algorithms and Performance
Hadronic tau decays play a crucial role in Standard Model measurements as well as in the search for physics beyond the Standard Model by the ATLAS experiment at the Large Hadron Collider (LHC). However, hadronic tau decays are difficult to identify and trigger on due to their resemblance to QCD jets. Given the large production cross section of QCD processes, designing and operating a tr...